
    Neuron-Astrocyte Associative Memory

    Astrocytes, a unique type of glial cell, are thought to play a significant role in memory due to their involvement in modulating synaptic plasticity. Nonetheless, no existing theories explain how neurons, synapses, and astrocytes could collectively contribute to memory function. To address this, we propose a biophysical model of neuron-astrocyte interactions that unifies various viewpoints on astrocyte function in a principled, biologically grounded framework. A key aspect of the model is that astrocytes mediate long-range interactions between distant tripartite synapses, effectively creating "multi-neuron synapses" in which more than two neurons interact at the same synapse. Such multi-neuron synapses are ubiquitous in models of Dense Associative Memory (also known as Modern Hopfield Networks) and are known to lead to superlinear memory storage capacity, a desirable computational feature. We establish a theoretical relationship between neuron-astrocyte networks and Dense Associative Memories and demonstrate that neuron-astrocyte networks have a larger memory storage capacity per compute unit than previously published biological implementations of Dense Associative Memories. This theoretical correspondence suggests the exciting hypothesis that memories could be stored, at least partially, within astrocytes rather than in the synaptic weights between neurons. Importantly, the multi-neuron synapses can be influenced by feedforward signals into the astrocytes, such as neuromodulators, potentially originating from distant neurons.
    Comment: 18 pages, 2 figures
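
    The multi-neuron synapses referred to above correspond to the higher-order interaction terms of a Dense Associative Memory. As a minimal sketch of that correspondence (not the paper's biophysical neuron-astrocyte model), the snippet below implements the standard rectified-polynomial Dense Associative Memory energy with interaction order n and an asynchronous update that recovers a stored pattern from a corrupted cue; the pattern counts, interaction order, and function names are illustrative assumptions.

```python
# Minimal sketch of a Dense Associative Memory (Modern Hopfield Network)
# with interaction order n, i.e. "multi-neuron synapses". This is the
# standard rectified-polynomial formulation, NOT the paper's biophysical
# neuron-astrocyte model; all parameters here are illustrative.
import numpy as np

def dam_recall(patterns, cue, n=3, sweeps=5):
    """Asynchronously update a +/-1 state to lower the energy
    E(s) = -sum_mu F(xi_mu . s), with F(x) = max(x, 0)**n."""
    F = lambda x: np.maximum(x, 0.0) ** n
    state = cue.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(state.size):
            s_plus, s_minus = state.copy(), state.copy()
            s_plus[i], s_minus[i] = 1.0, -1.0
            # Pick the value of neuron i that gives the lower energy.
            better_plus = F(patterns @ s_plus).sum() > F(patterns @ s_minus).sum()
            state[i] = 1.0 if better_plus else -1.0
    return state

# Usage: recover a stored pattern from a corrupted cue.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(20, 100))  # 20 patterns, 100 neurons
cue = patterns[0].copy()
cue[:15] *= -1                                      # flip 15 bits
print(np.array_equal(dam_recall(patterns, cue), patterns[0]))
```

    Raising the interaction order n above 2 is what turns pairwise synapses into effective multi-neuron ones, and it is the source of the superlinear storage capacity mentioned in the abstract.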

    Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis

    How can we tell whether two neural networks are utilizing the same internal processes for a particular computation? This question is pertinent for multiple subfields of both neuroscience and machine learning, including neuroAI, mechanistic interpretability, and brain-machine interfaces. Standard approaches for comparing neural networks focus on the spatial geometry of latent states. Yet in recurrent networks, computations are implemented at the level of neural dynamics, which do not have a simple one-to-one mapping with geometry. To bridge this gap, we introduce a novel similarity metric that compares two systems at the level of their dynamics. Our method incorporates two components: first, using recent advances in data-driven dynamical systems theory, we learn a high-dimensional linear system that accurately captures core features of the original nonlinear dynamics; next, we compare these linear approximations via a novel extension of Procrustes Analysis that accounts for how vector fields change under orthogonal transformation. Via four case studies, we demonstrate that our method effectively identifies and distinguishes dynamic structure in recurrent neural networks (RNNs), whereas geometric methods fall short. We additionally show that our method can distinguish learning rules in an unsupervised manner. Our method therefore opens the door to novel data-driven analyses of the temporal structure of neural computation, and to more rigorous testing of RNNs as models of the brain.
    Comment: 21 pages, 10 figures
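
    The two components of the comparison pipeline can be illustrated with a simplified sketch: fit a linear surrogate of each system's dynamics by least squares (a DMD-style approximation), then score the two surrogates up to an orthogonal change of basis. The published method uses delay embeddings and a dedicated extension of Procrustes Analysis over vector fields, so the function names and the rotation parameterization below (exponential of a skew-symmetric matrix) are simplifying assumptions rather than the authors' implementation.

```python
# Simplified sketch of a dynamics-level comparison: fit a linear surrogate of
# each system, then compare the surrogates up to an orthogonal change of basis.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def fit_linear_dynamics(X):
    """Least-squares fit of X[t+1] ~= A @ X[t]; rows of X are time points."""
    return X[1:].T @ np.linalg.pinv(X[:-1].T)

def dynamical_dissimilarity(A1, A2):
    """min_C ||C A1 C^T - A2||_F over rotations C = expm(S - S^T)."""
    d = A1.shape[0]
    def loss(p):
        S = p.reshape(d, d)
        C = expm(S - S.T)              # exponential of a skew-symmetric matrix
        return np.linalg.norm(C @ A1 @ C.T - A2)
    return minimize(loss, np.zeros(d * d), method="L-BFGS-B").fun

# Usage: the same noisy rotation observed in two coordinate bases should come
# out as (nearly) identical dynamics, so the score should be close to zero.
rng = np.random.default_rng(0)
A_true = np.array([[np.cos(0.1), -np.sin(0.1)], [np.sin(0.1), np.cos(0.1)]])
X = np.zeros((500, 2)); X[0] = [1.0, 0.0]
for t in range(499):
    X[t + 1] = A_true @ X[t] + 0.01 * rng.standard_normal(2)
Q = np.array([[0.0, -1.0], [1.0, 0.0]])   # fixed rotation of the observation basis
print(dynamical_dissimilarity(fit_linear_dynamics(X), fit_linear_dynamics(X @ Q.T)))
```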

    Recursive Construction of Stable Assemblies of Recurrent Neural Networks

    Advanced applications of modern machine learning will likely involve combinations of trained networks, as are already used in spectacular systems such as DeepMind's AlphaGo. Recursively building such combinations in an effective and stable fashion, while also allowing for continual refinement of the individual networks, as nature does for biological networks, will require new analysis tools. This paper takes a step in this direction by establishing contraction properties of broad classes of nonlinear recurrent networks and neural ODEs, and showing how these quantified properties allow one, in turn, to recursively construct stable networks of networks in a systematic fashion. The results can also be used to stably combine recurrent networks and physical systems with quantified contraction properties. Similarly, they may be applied to modular computational models of cognition. We perform experiments with these combined networks on benchmark sequential tasks (e.g., permuted sequential MNIST) to demonstrate their capacity for processing information across long timescales in a provably stable manner.
    Comment: 23 pages, 3 figures
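
    One of the combination properties invoked above, namely that contracting modules coupled through negative feedback yield a contracting assembly, can be checked numerically. The sketch below uses a sufficient condition in the identity metric for rate networks of the form tau*dx/dt = -x + W*phi(x) + u(t), with linear skew-symmetric coupling between two modules; the network sizes, weight scales, and tanh nonlinearity are illustrative assumptions, and the paper's general results allow more flexible metrics.

```python
# Numerical sketch of the negative-feedback combination property: the
# symmetric part of the assembly's Jacobian stays uniformly negative
# definite (which implies contraction) when each module is contracting
# and the coupling is skew-symmetric (B and -B^T).
import numpy as np

def assembly_jacobian_sym(x1, x2, W1, W2, B, phi_prime):
    """Symmetric part of the Jacobian of the coupled system
        dx1/dt = -x1 + W1 phi(x1) + B   x2
        dx2/dt = -x2 + W2 phi(x2) - B.T x1."""
    D1, D2 = np.diag(phi_prime(x1)), np.diag(phi_prime(x2))
    J = np.block([[-np.eye(x1.size) + W1 @ D1, B],
                  [-B.T, -np.eye(x2.size) + W2 @ D2]])
    return (J + J.T) / 2.0

rng = np.random.default_rng(1)
n1, n2 = 40, 30
W1 = 0.3 * rng.standard_normal((n1, n1)) / np.sqrt(n1)  # spectral norm well below 1
W2 = 0.3 * rng.standard_normal((n2, n2)) / np.sqrt(n2)
B = rng.standard_normal((n1, n2)) / np.sqrt(n1 + n2)
phi_prime = lambda x: 1.0 - np.tanh(x) ** 2              # derivative of tanh

# The skew-symmetric coupling cancels in the symmetric part, so the largest
# eigenvalue stays negative whenever each module is contracting on its own.
worst = -np.inf
for _ in range(100):                                     # sample random states
    S = assembly_jacobian_sym(rng.standard_normal(n1), rng.standard_normal(n2),
                              W1, W2, B, phi_prime)
    worst = max(worst, np.linalg.eigvalsh(S).max())
print("largest symmetric-part eigenvalue over sampled states:", worst)
```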

    Achieving stable dynamics in neural circuits.

    The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity, and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to the stability of fixed geometric objects in state space (e.g., points, lines, planes); it addresses the stability of state trajectories, which may be complex and time-varying.
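
    A simple numerical illustration of the trajectory-level notion of stability described above is to run two copies of the same plastic network from different initial states under identical time-varying input and check that the distance between their state trajectories shrinks. The sketch below uses a rate network with a decaying anti-Hebbian weight update; it does not enforce Dale's law or the exact plasticity rules analyzed in the paper, so all parameters are illustrative assumptions.

```python
# Illustrative check of trajectory-level stability: two copies of the same
# plastic rate network, same weights and input, different initial states.
import numpy as np

def simulate(x0, W0, inputs, dt=0.01, eta=0.05, tau_w=5.0):
    """Euler-integrate
        dx/dt = -x + W @ tanh(x) + u(t)
        dW/dt = (-W - eta * tanh(x) tanh(x)^T) / tau_w   # decaying anti-Hebbian
    and return the neural states over time."""
    x, W, xs = x0.copy(), W0.copy(), []
    for u in inputs:
        r = np.tanh(x)
        x = x + dt * (-x + W @ r + u)
        W = W + (dt / tau_w) * (-W - eta * np.outer(r, r))
        xs.append(x.copy())
    return np.array(xs)

rng = np.random.default_rng(2)
n, T = 50, 3000
W0 = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)       # weak initial coupling
inputs = 0.5 * np.sin(np.linspace(0.0, 30.0, T))[:, None] * rng.standard_normal(n)
xa = simulate(rng.standard_normal(n), W0, inputs)
xb = simulate(rng.standard_normal(n), W0, inputs)          # new initial state only
dist = np.linalg.norm(xa - xb, axis=1)
print(dist[0], dist[-1])                                   # the gap should shrink toward zero
```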